2 research outputs found

    Visual feedback for humans about robots' perception in collaborative environments

    In recent years, major advances in artificial intelligence have allowed robots to perceive their environment, which includes not only static objects but also dynamic ones such as humans. Indeed, robotic perception is a fundamental requirement for safe robot autonomy in human-robot collaboration. However, true collaboration requires that robots and humans perceive each other's intentions and interpret which actions the other is performing. In this work, we developed a visual representation tool that illustrates the robot's perception of the space it shares with a person. Specifically, we adapted an existing human pose estimation system and created a visualisation tool to represent the robot's perception of human-robot closeness. We also performed a first evaluation of the system under realistic conditions, using the Tiago robot and a person as a test subject. This work is a first step towards giving humans a better understanding of robots' perception in collaborative scenarios.
    Peer Reviewed · Preprint
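    The abstract does not detail how closeness is estimated or rendered; the sketch below is a minimal illustration, assuming the pose estimator returns 3D keypoints in the robot's frame and that the overlay maps the nearest-keypoint distance to a traffic-light colour. All positions, thresholds, and names here are hypothetical, not taken from the paper.

    ```python
    import numpy as np

    # Hypothetical 3D keypoints (metres, robot frame) from a pose estimator,
    # and a robot end-effector position; all values are illustrative.
    human_keypoints = np.array([[0.9, 0.1, 1.5],    # head
                                [0.8, 0.0, 1.1],    # torso
                                [0.6, -0.2, 0.9]])  # hand
    robot_effector = np.array([0.4, 0.0, 1.0])

    # Closeness = distance from the robot to the nearest human keypoint.
    distances = np.linalg.norm(human_keypoints - robot_effector, axis=1)
    closeness = distances.min()

    # Map closeness to a colour for the on-screen overlay: red inside a
    # danger radius, yellow in a warning band, green otherwise.
    DANGER, WARNING = 0.3, 0.8  # assumed thresholds in metres
    if closeness < DANGER:
        colour = "red"
    elif closeness < WARNING:
        colour = "yellow"
    else:
        colour = "green"
    print(f"nearest keypoint at {closeness:.2f} m -> overlay colour: {colour}")
    ```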

    Deep learning with RGB-D images: object classification and pose estimation

    No full text
    As part of the doctoral thesis, the objective is to develop a human-machine interface to control an assistive robotic arm with more than 6 degrees of freedom. The use of deep learning techniques for object recognition and pose estimation, so that the arm can interact with the recognised objects, is presented. Three multimodal convolutional neural network models were implemented using RGB-D images from the BigBIRD database. Each model has three classification outputs: 22 objects, 5 cameras, and 8 rotation labels. The best model achieved 96% accuracy for objects, 98% for cameras, and 56% for rotation.
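    The abstract describes the architecture only at a high level; the sketch below is a minimal PyTorch illustration of a multimodal CNN with separate RGB and depth branches feeding three classification heads sized to the 22-object / 5-camera / 8-rotation outputs. The layer choices and the name MultiHeadRGBDNet are assumptions for illustration, not the thesis models.

    ```python
    import torch
    import torch.nn as nn

    class MultiHeadRGBDNet(nn.Module):
        """Illustrative multimodal CNN: one small branch per modality
        (RGB, depth); fused features feed three classification heads."""
        def __init__(self):
            super().__init__()
            def branch(in_ch):
                return nn.Sequential(
                    nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.rgb = branch(3)    # RGB branch (3 channels)
            self.depth = branch(1)  # depth branch (1 channel)
            self.object_head = nn.Linear(64, 22)   # 22 objects
            self.camera_head = nn.Linear(64, 5)    # 5 cameras
            self.rotation_head = nn.Linear(64, 8)  # 8 rotation labels

        def forward(self, rgb, depth):
            feats = torch.cat([self.rgb(rgb), self.depth(depth)], dim=1)
            return (self.object_head(feats),
                    self.camera_head(feats),
                    self.rotation_head(feats))

    # Usage: a dummy RGB-D batch yields three sets of logits, one per task.
    net = MultiHeadRGBDNet()
    obj, cam, rot = net(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
    print(obj.shape, cam.shape, rot.shape)  # (2, 22) (2, 5) (2, 8)
    ```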